Multivariate Probability Distributions

Joint Probability Distribution

If $X_1,\ldots,X_n$ are discrete random variables with $P[X_1 = x_1, X_2 = x_2,\ldots, X_n = x_n] = p(x_1,\ldots,x_n)$, where $x_1,\ldots,x_n$ are numbers, then the function $p$ is the joint probability mass function (p.m.f.) for the random variables $X_1,\ldots,X_n$.
For continuous random variables $Y_1,\ldots,Y_n$, a function $f$ is called the joint probability density function if

$$P[Y \in A] = \underbrace{\int\cdots\int}_{A} f(y_1,\ldots,y_n)\,dy_1\,dy_2\cdots dy_n.$$
Details

If $X_1,\ldots,X_n$ are discrete random variables with $P[X_1 = x_1, X_2 = x_2,\ldots, X_n = x_n] = p(x_1,\ldots,x_n)$, where $x_1,\ldots,x_n$ are numbers, then the function $p$ is the joint probability mass function (p.m.f.) for the random variables $X_1,\ldots,X_n$.
For continuous random variables $Y_1,\ldots,Y_n$, a function $f$ is called the joint probability density function if

$$P[Y \in A] = \underbrace{\int\int\cdots\int}_{A} f(y_1,\ldots,y_n)\,dy_1\,dy_2\cdots dy_n.$$
Note that if $X_1,\ldots,X_n$ are independent and identically distributed, each with p.m.f. $q$, then $p(x_1, x_2,\ldots,x_n) = q(x_1)q(x_2)\cdots q(x_n)$, i.e.

$$P[X_1 = x_1, X_2 = x_2,\ldots,X_n = x_n] = P[X_1 = x_1]\,P[X_2 = x_2]\cdots P[X_n = x_n].$$
Note also that if $A$ is a set of possible outcomes ($A \subseteq \mathbb{R}^n$), then we have

$$P[X \in A] = \sum_{(x_1,\ldots,x_n)\in A} p(x_1,\ldots,x_n).$$
Examples

An urn contains blue and red marbles, which are either light or heavy.
Let $X$ denote the color and $Y$ the weight of a marble chosen at random:
$$\begin{array}{c c c c}
\hline
X \setminus Y & \text{L} & \text{H} & \text{Total} \\
\text{B} & 5 & 6 & 11 \\
\text{R} & 7 & 2 & 9 \\
\text{Total} & 12 & 8 & 20 \\
\hline
\end{array}$$

We have $P[X = \text{B}, Y = \text{L}] = \frac{5}{20}$.
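These counts and the corresponding joint probabilities can be set up in R, for example as follows (a sketch; the object name `counts` is just illustrative):

```
counts <- matrix(c(5, 6,
                   7, 2),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(X = c("B", "R"), Y = c("L", "H")))
counts / sum(counts)   # joint probabilities; the ["B", "L"] entry is 5/20
```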
The joint p.m.f. is

$$\begin{array}{c c c c}
\hline
X \setminus Y & \text{L} & \text{H} & \text{Total} \\
\text{B} & \frac{5}{20} & \frac{6}{20} & \frac{11}{20} \\
\text{R} & \frac{7}{20} & \frac{2}{20} & \frac{9}{20} \\
\text{Total} & \frac{12}{20} & \frac{8}{20} & 1 \\
\hline
\end{array}$$

The Random Sample

A set of random variables $X_1,\ldots,X_n$ is a random sample if they are independent and identically distributed.
A set of numbers $x_1,\ldots,x_n$ is called a random sample if the numbers can be viewed as an outcome of such random variables.
Details

Samples from populations can be obtained in a number of ways.
However, to draw valid conclusions about populations, the samples need to be obtained randomly.
In random sampling, each item or element of the population has an equal and independent chance of being selected.
A set of random variables $X_1,\ldots,X_n$ is a random sample if they are independent and identically distributed.
If a set of numbers $x_1,\ldots,x_n$ can be viewed as an outcome of such random variables, they are called a random sample.
Examples

If $X_1,\ldots,X_n \sim Unf(0,1)$ are independent and identically distributed, i.e. the $X_i$ are independent and each has a uniform distribution between 0 and 1, then their joint density is the product of the densities of $X_1,\ldots,X_n$.
In particular, for $n = 2$ and $x_1, x_2 \in \mathbb{R}$,

$$f(x_1, x_2) = f_1(x_1) f_2(x_2) = \begin{cases} 1 & \text{if } 0 \leq x_1, x_2 \leq 1 \\ 0 & \text{elsewhere} \end{cases}$$

Toss two dice independently, and let $X_1, X_2$ denote the two (future) outcomes.
Then

$$P[X_1 = x_1, X_2 = x_2] = \begin{cases} \frac{1}{36} & \text{if } 1 \leq x_1, x_2 \leq 6 \\ 0 & \text{elsewhere} \end{cases}$$

is the joint p.m.f.
The Sum of Discrete Random Variables

Details

Suppose $X$ and $Y$ are discrete random variables with joint probability mass function $p$. Let $Z = X + Y$. Then

$$P(Z = z) = \sum_{\{(x,y):\, x+y=z\}} p(x,y)$$
Examples

$(X,Y) = \text{outcomes}$; the table below lists $X + Y$ for each pair:

```
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    2    3    4    5    6    7
[2,]    3    4    5    6    7    8
[3,]    4    5    6    7    8    9
[4,]    5    6    7    8    9   10
[5,]    6    7    8    9   10   11
[6,]    7    8    9   10   11   12
```
$$P[X+Y = 7] = \frac{6}{36} = \frac{1}{6}$$
Because there are a total of 36 equally likely outcomes and the value 7 occurs six times, this means that $P[X + Y = 7] = \frac{1}{6}$. Also,

$$P[X+Y = 4] = \frac{3}{36} = \frac{1}{12}$$
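These probabilities can be reproduced in R, for instance with `outer` (a sketch; the object name `sums` is illustrative):

```
sums <- outer(1:6, 1:6, "+")   # all 36 equally likely values of X + Y
table(sums)/36                 # the p.m.f. of X + Y
sum(sums == 7)/36              # P[X + Y = 7] = 1/6
sum(sums == 4)/36              # P[X + Y = 4] = 1/12
```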
The Sum of Two Continuous Random Variables

If $X$ and $Y$ are continuous random variables with joint p.d.f. $f$ and $Z = X + Y$, then we can find the density of $Z$ by calculating the cumulative distribution function.
Details

If $X$ and $Y$ are continuous random variables with joint p.d.f. $f$ and $Z = X + Y$, then we can find the density of $Z$ by first finding the cumulative distribution function

$$P[Z \leq z] = P[X+Y \leq z] = \iint_{\{(x,y):\, x+y \leq z\}} f(x,y)\,dx\,dy$$
Examples

If $X, Y \sim Unf(0,1)$ are independent and $Z = X + Y$, then

$$P[Z \leq z] = \begin{cases} 0 & \text{for } z \leq 0 \\ \frac{z^2}{2} & \text{for } 0 < z \leq 1 \\ 1 - \frac{(2-z)^2}{2} & \text{for } 1 < z < 2 \\ 1 & \text{for } z \geq 2 \end{cases}$$

and the density of $Z$ becomes

$$g(z) = \begin{cases} z & \text{for } 0 < z \leq 1 \\ 2 - z & \text{for } 1 < z \leq 2 \\ 0 & \text{elsewhere} \end{cases}$$

To approximate the distribution of $Z = X + Y$, where $X, Y \sim Unf(0,1)$ are independent and identically distributed, we can use Monte Carlo simulation.
So, generate 10,000 pairs, set them up in a matrix and compute the sums.
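A minimal R sketch of this simulation (the object names and the use of `hist` are illustrative):

```
n <- 10000
xy <- matrix(runif(2*n), ncol = 2)         # each row is one (x, y) pair
z <- xy[, 1] + xy[, 2]                     # the 10,000 simulated sums
hist(z, breaks = 50, probability = TRUE)   # should resemble the triangular density g
```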
Means and Variances of Linear Combinations of Independent Random Variables

If $X$ and $Y$ are random variables and $a, b \in \mathbb{R}$, then

$$E[aX + bY] = aE[X] + bE[Y]$$
Details

If $X$ and $Y$ are random variables, then

$$E[X+Y] = E[X] + E[Y]$$
i.e. the expected value of the sum is just the sum of the expected values.
The same applies to a finite sum, and more generally
$$E\left[\sum_{i=1}^{n} a_i X_i\right] = \sum_{i=1}^{n} a_i E[X_i]$$
when $X_1,\dots,X_n$ are random variables and $a_1,\dots,a_n$ are numbers (if the expectations exist).
If the random variables are independent, then the variances also add:
$$Var[X+Y] = Var[X] + Var[Y]$$
and
$$Var\left[\sum_{i=1}^{n} a_i X_i\right] = \sum_{i=1}^{n} a_i^2\, Var[X_i]$$
Examples

If $X, Y \sim Unf(0,1)$ are independent and identically distributed, then

$$E[X+Y] = E[X] + E[Y] = \int_0^1 x \cdot 1\,dx + \int_0^1 x \cdot 1\,dx = \left[\frac{1}{2}x^2\right]_0^1 + \left[\frac{1}{2}x^2\right]_0^1 = 1$$
Let $X, Y \sim N(0,1)$. Since $E[X^2] = Var[X] + E[X]^2 = 1$, we get $E[X^2 + Y^2] = 1 + 1 = 2$.
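Both expectations can be checked by simulation in R (a sketch; the sample size is arbitrary):

```
n <- 100000
x <- runif(n); y <- runif(n)   # X, Y ~ Unf(0,1)
mean(x + y)                    # close to E[X + Y] = 1
u <- rnorm(n); v <- rnorm(n)   # X, Y ~ N(0,1)
mean(u^2 + v^2)                # close to E[X^2 + Y^2] = 2
```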
Means and Variances of Linear Combinations of Measurements

If $x_1,\dots,x_n$ and $y_1,\dots,y_n$ are numbers and we set

$$z_i = x_i + y_i$$

$$w_i = a x_i$$

where $a > 0$, then

$$\overline{z} = \frac{1}{n}\sum_{i=1}^{n} z_i = \overline{x} + \overline{y}$$

$$\overline{w} = a\overline{x}$$

$$s_w^2 = \frac{1}{n-1}\sum_{i=1}^{n}(w_i - \overline{w})^2 = \frac{1}{n-1}\sum_{i=1}^{n}(a x_i - a\overline{x})^2 = a^2 s_x^2$$

and

$$s_w = a s_x$$
Examples

We set:

```
a <- 3
x <- c(1:5)
y <- c(6:10)
```
Then:
```
z <- x + y
w <- a*x
n <- length(x)
```
Then $\overline{z}$ is:

```
> (sum(x) + sum(y))/n
[1] 11
> mean(z)
[1] 11
```
and $\overline{w}$ becomes:

```
> a*mean(x)
[1] 9
> mean(w)
[1] 9
```
and $s_w^2$ equals:

```
> sum((w - mean(w))^2)/(n-1)
[1] 22.5
> sum((a*x - a*mean(x))^2)/(n-1)
[1] 22.5
> a^2*var(x)
[1] 22.5
```
and $s_w$ equals:

```
> a*sd(x)
[1] 4.743416
> sd(w)
[1] 4.743416
```
The Joint Density of Independent Normal Random Variables

If $Z_1, Z_2 \sim N(0,1)$ are independent, then they each have density

$$\phi(x) = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}}, \quad x \in \mathbb{R}$$

and the joint density is the product $f(z_1, z_2) = \phi(z_1)\phi(z_2)$, or

$$f(z_1, z_2) = \frac{1}{(\sqrt{2\pi})^2}\, e^{-\frac{z_1^2}{2} - \frac{z_2^2}{2}}$$
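As a quick numerical check in R (a sketch; the evaluation point is arbitrary), the product of the two marginal densities agrees with the joint density formula:

```
z1 <- 0.5; z2 <- -1
dnorm(z1)*dnorm(z2)               # product of the two N(0,1) densities
1/(2*pi) * exp(-z1^2/2 - z2^2/2)  # the joint density formula; same value
```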
Details

If $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ are independent, then their densities are

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_1}\, e^{-\frac{(x-\mu_1)^2}{2\sigma_1^2}}$$

and

$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_2}\, e^{-\frac{(y-\mu_2)^2}{2\sigma_2^2}}$$

and the joint density becomes

$$f(x,y) = \frac{1}{2\pi\sigma_1\sigma_2}\, e^{-\frac{(x-\mu_1)^2}{2\sigma_1^2} - \frac{(y-\mu_2)^2}{2\sigma_2^2}}$$
Now suppose $X_1,\ldots,X_n \sim N(\mu, \sigma^2)$ are independent and identically distributed. Then

$$f(\underline{x}) = \frac{1}{(2\pi)^{n/2}\sigma^n}\, e^{-\sum_{i=1}^{n}\frac{(x_i-\mu)^2}{2\sigma^2}}$$
is the multivariate normal density in the case of independent and identically distributed variables.
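In R this iid joint density is just the product of the marginal densities, which can be written as a short sketch (the function names are illustrative):

```
# Joint density of an iid N(mu, sigma^2) sample evaluated at the vector x
joint_density <- function(x, mu, sigma) prod(dnorm(x, mean = mu, sd = sigma))

# Direct evaluation of the formula above, for comparison
joint_formula <- function(x, mu, sigma) {
  n <- length(x)
  1/((2*pi)^(n/2) * sigma^n) * exp(-sum((x - mu)^2)/(2*sigma^2))
}

joint_density(c(0.2, -1, 0.5), mu = 0, sigma = 1)
joint_formula(c(0.2, -1, 0.5), mu = 0, sigma = 1)   # same value
```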
More General Multivariate Probability Density Functions

Examples

Suppose $X$ and $Y$ have the joint density

$$f(x,y) = \begin{cases} 2 & \text{if } 0 \leq y \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases}$$

First notice that
$$\int_{\mathbb{R}}\int_{\mathbb{R}} f(x,y)\,dx\,dy = \int_{x=0}^{1}\int_{y=0}^{x} 2\,dy\,dx = \int_0^1 2x\,dx = 1,$$
so $f$ is indeed a density function.
Now, to find the density of $X$, we first find the c.d.f. of $X$. Note that for $a < 0$ we have $P[X \leq a] = 0$, but if $a \geq 0$, we obtain
$$F_X(a) = P[X \leq a] = \int_{x=0}^{a}\int_{y=0}^{x} 2\,dy\,dx = \left[x^2\right]_0^a = a^2$$
The density of $X$ is therefore
$$f_X(x) = \frac{dF_X(x)}{dx} = \begin{cases} 2x & \text{if } 0 \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases}$$

Handout

If $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is such that

$$P[X \in A] = \int_A \cdots \int f(x_1,\ldots,x_n)\,dx_1\cdots dx_n$$

and $f(\underline{x}) \geq 0$ for all $\underline{x} \in \mathbb{R}^n$, then $f$ is the joint density of

$$\mathbf{X} = \left( \begin{array}{c} X_1 \\ \vdots \\ X_n \end{array} \right)$$

If we have the joint density of some multidimensional random variable $X = (X_1,\ldots,X_n)$ given in this manner, then we can find the individual density functions of the $X_i$'s by integrating out the other variables.
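For the example above, this integration can be illustrated numerically in R (a sketch; using `integrate` over $y$ for a few fixed values of $x$):

```
f <- function(x, y) ifelse(0 <= y & y <= x & x <= 1, 2, 0)   # joint density

# Marginal density of X at a point x: integrate f(x, y) over y
fx <- function(x) integrate(function(y) f(x, y), lower = 0, upper = 1)$value

sapply(c(0.25, 0.5, 0.9), fx)   # approximately 2x, i.e. 0.5, 1.0, 1.8
```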